don't use CIRCLE_WORKFLOW_WORKSPACE_ID for parallel nonce anymore #64
base: master
Conversation
…t unique between retries
I've done some manual testing, and it seems like CIRCLE_WORKFLOW_WORKSPACE_ID is always preserved on retried jobs, while the BUILD_NUM is new. This change seems to resolve the issue, at least within our project... I haven't been able to find any useful supporting documentation, however.
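In case it helps to see it spelled out, the change boils down to roughly the following (a minimal sketch only; the class and property names here are assumptions, not the client's actual code):

```python
import os


class CircleCIEnvironment:
    """Illustrative sketch of CircleCI detection for the Percy client."""

    @property
    def parallel_nonce(self):
        # Before: CIRCLE_WORKFLOW_WORKSPACE_ID, which is preserved when a job
        # is retried, so the retried job collides with the finalized build.
        # After: CIRCLE_BUILD_NUM, which is new on every retry.
        return os.environ.get("CIRCLE_BUILD_NUM")

    @property
    def parallel_total_shards(self):
        # CIRCLE_NODE_TOTAL is the container count CircleCI sets for a
        # parallel job; Percy uses it to know when all shards have reported.
        total = os.environ.get("CIRCLE_NODE_TOTAL")
        return int(total) if total else None
```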
Any chance anyone could take a look at this? It's pretty disruptive to have to create dummy commits whenever we want to retry a CircleCI job. Thanks!
hey @caffodian thanks for the report -- I can give you a couple of workarounds here, but we'll need to debate a bit more internally about actually switching this across all our SDKs (see some history in #45 (comment)).

Workaround 1: set the …

Workaround 2: use our CircleCI orb and "finalize all" mode: https://docs.percy.io/docs/circleci#section-parallelized-tests-configuring-your-circleci-orb
Thanks for the response. I saw some of the debate, but it seemed that perhaps the wrong conclusions were drawn... or maybe the rebuild case wasn't considered? I'm also quite frustrated at the lack of documentation on what the correct env variable would be.

The second workaround doesn't actually fully work - if the build was finalized and you then retry it, all snapshotting tests will still get a 409, because a new build number is still not generated. The first workaround might work, but in our case it would need some fudging: we'd ideally want to keep our script runner the same but run it with different settings through the CircleCI config, and because you can't really create new env variables that reference the built-in CircleCI ones, it gets really messy. I could possibly set this env variable manually for all jobs that actually use Percy, but that's a fairly ugly workaround. I'll think more on this. Thanks again! Hope there's a better solution in the future.
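One way to do the "set this env variable manually" workaround without touching the CircleCI config at all is to set it from the test runner itself. A rough sketch, assuming the client gives an explicit PERCY_PARALLEL_NONCE precedence over its own CircleCI auto-detection (worth double-checking for the client version in use); the conftest.py location is just an example:

```python
# conftest.py (hypothetical location) -- runs before any Percy snapshots.
import os

# Assumption: the Percy client prefers an explicit PERCY_PARALLEL_NONCE over
# its CircleCI auto-detection. CIRCLE_BUILD_NUM is unique per job *and* per
# retry, so a retried job gets a fresh Percy parallel build.
build_num = os.environ.get("CIRCLE_BUILD_NUM")
if build_num and not os.environ.get("PERCY_PARALLEL_NONCE"):
    os.environ["PERCY_PARALLEL_NONCE"] = build_num
```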
Bump - do you have any advice on why the orb didn't actually fix my issue above?
👋 Hey @caffodian just noticing this PR! If this is still an issue (I hope not!) the circle env var you'll want to use is …
If you wanted to rebase this PR, put the code back for …
Thanks! I'll give it a try with our setup first - we use a parallel nose setup on CircleCI, so it'll fail pretty loudly if this doesn't behave the same.
I just gave it a check, and this doesn't actually work. In fact, with the parallel setup, it's actually worse. It seems that the suggested variable is shared by every job in the workflow.

Therefore, using it as the parallel nonce means that (if you have that setup) only the first job within the workflow can actually complete. In our case, we have two Selenium jobs within the same workflow, each with a parallelism of 4 - once the first parallel job finishes, none of the others can write snapshots; they get conflict errors, etc.
Hmm, this suggests the parallel total is incorrect. What is …
First off - thanks for the integration. Percy has been super useful to us and has helped us be more proactive in finding visual differences.
I've been dealing with a maddening issue for a few weeks and I think I have a solution...
When running tests in parallel on CircleCI (such as with the nose-parallel plugin), retrying a job results in failures like the following:
HTTPError: 409 Client Error: Conflict for url: https://percy.io/api/v1/builds/0000000/snapshots/

or

requests.exceptions.HTTPError: 409 Client Error: Conflict for url: https://percy.io/api/v1/builds/0000000/finalize

It appears that in retries, CIRCLE_WORKFLOW_WORKSPACE_ID does not actually change. I don't think we should be using it as a parallel nonce. Without this fix, when we retry a Selenium job on CircleCI, all our Percy snapshots fail, and we have to do a strange workaround of making an extra commit just to get the snapshots submitted properly.
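To make the mechanism concrete, here is a tiny sketch of the behaviour (an illustrative model only, not Percy's actual API): parallel shards that report the same nonce land in the same build, and a finalized build rejects further writes.

```python
# Toy model of the failure mode -- grouping-by-nonce plus finalization.
class FakePercyBuilds:
    def __init__(self):
        self.finalized = set()

    def create_snapshot(self, nonce):
        if nonce in self.finalized:
            raise RuntimeError("409 Conflict: build for this nonce was already finalized")

    def finalize(self, nonce):
        self.finalized.add(nonce)


api = FakePercyBuilds()

# First run of the job: the nonce comes from CIRCLE_WORKFLOW_WORKSPACE_ID.
api.create_snapshot("workspace-id-1234")
api.finalize("workspace-id-1234")

# Retry of the same job: the workspace ID has not changed, so every snapshot
# now collides with the finalized build -- the 409 errors shown above.
api.create_snapshot("workspace-id-1234")  # raises the 409-style conflict
```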
thanks!